Buckle your belts!
We hit a milestone last weekend! The Belt Buckle is now in a functional alpha state!
"But Pete, what does that even mean?"
Well, it means that we now have basic functionality, no crippling bugs, and the Belt Buckle can perform its task as we expect it to. It's probably not well optimized, definitely not bug-free, and still missing some nice-to-have features, but it works! If you're confused and not sure what I'm on about, check out this intro article.
We don't yet have air jets for the conveyor belt, so at the moment we are simulating them with LEDs. You can see a video of it in action below. When a part passes the "camera," the Belt Buckle begins to track it and activates the second LED.
I know it's not impressive looking, but it's still worth celebrating because we can now begin working on other parts of the software. Once the rest of the modules reach a usable alpha state, we can tie it all together into a functioning prototype. We'll get some real air jets and build a hopper to feed parts directly to the belt.
You had one job......
So let's take a look at how the Belt Buckle works, and what it actually does.
When a part moves past the camera, the Belt Buckle is informed by the server. This will happen before the neural network or the sorting engine do their job. Time is critical and I currently have no clue how far down the belt a part may move between being seen and being assigned to a bin. I cannot know until the rest of the software is written, so for now I'm preparing for the worst.
This means that the Belt Buckle starts tracking the part before it knows what bin the part is destined for. Another message has to come later to assign that part to a bin, hopefully before the part goes too far.
These tasks require a number of supporting systems. First we need an interface to send and receive commands. I devised a simple protocol for sending and receiving commands over a serial port using cute little 23 character long packets like this one:
<BXXXX231145223039CSUM>
We use the < character at the start of each packet and the > character at the end. The first letter defines which command we are sending. Currently I have about a dozen different commands; most of them are for troubleshooting. The two important ones are 'A' and 'B'. The former command is sent when a part is seen by the camera, so the Belt Buckle can start tracking it. Once the neural network has identified the part, the sorting engine decides on a bin number and uses the 'B' command to tell the Belt Buckle.
Here's how the rest of the packet structure works:
Each packet contains an argument and a payload in addition to the command. Simply put, the command determines what action is taken, the arguments modify how the command is executed, and the payload is data which will be acted upon. Not all commands will need arguments or even payloads. Such packets will have X's in place of real data, because fixing the length of each packet simplifies the code.
The four checksum characters will be used for error detection. Checksum verification will be implemented once the sorting engine is working.
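To make that concrete, here's a rough sketch in Python of how a packet like the one above could be built and parsed. The field widths come straight from the example packet (1 command character, 4 argument characters, 12 payload characters, 4 checksum characters, wrapped in < and >); the checksum scheme itself is just a placeholder I made up (a 16-bit sum of the character codes), since the real one isn't written yet:

```python
# Sketch of the 23-character packet format described above.
# Field widths come from the example packet; the checksum algorithm
# is a made-up placeholder (the real scheme isn't implemented yet).

CMD_LEN, ARG_LEN, PAYLOAD_LEN, CSUM_LEN = 1, 4, 12, 4

def checksum(body: str) -> str:
    # Placeholder scheme: sum of the character codes, as 4 hex digits.
    return f"{sum(body.encode()) & 0xFFFF:04X}"

def build_packet(command: str, argument: str = "XXXX", payload: str = "X" * 12) -> str:
    assert len(command) == CMD_LEN
    assert len(argument) == ARG_LEN
    assert len(payload) == PAYLOAD_LEN
    body = command + argument + payload
    return f"<{body}{checksum(body)}>"

def parse_packet(packet: str):
    assert len(packet) == 23 and packet[0] == "<" and packet[-1] == ">"
    command = packet[1]
    argument = packet[2:6]
    payload = packet[6:18]
    csum = packet[18:22]
    assert csum == checksum(command + argument + payload), "checksum mismatch"
    return command, argument, payload
```

An 'A' packet announcing a new part would then be `build_packet("A", payload=part_id)`. I'd guess the bin number for a 'B' command rides in the argument field, but the example packet above has X's there, so treat that as a guess.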
I should also point out that tracking parts before they're assigned a bin number may also be unnecessary. Only experimentation will prove it for sure, but it's possible that our software will be fast enough to make our two-step process obsolete. In that event, we'll simply drop the 'A' command, and just send the 'B'.
Like a snowflake
When a new part is picked up by the camera, it gets its very own unique identifier based on the current time. This way, it is practically impossible for two parts to get the same ID number.
This 12-digit number is now used to track each part through the neural network, the sorting engine, and the Belt Buckle. Notice how this number was the payload in our packet example. When the Belt Buckle receives or transmits any command pertaining to a specific part, it uses this number.
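For illustration, a time-based 12-digit ID might be generated like this. The exact scheme below (milliseconds since the epoch, truncated to 12 digits) is my own stand-in, not necessarily what the real software does:

```python
import time

def new_part_id() -> str:
    # Stand-in scheme: milliseconds since the epoch, truncated to
    # 12 digits. Two parts would have to hit the camera in the same
    # millisecond to collide.
    return f"{int(time.time() * 1000) % 10**12:012d}"
```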
Two heads are better than one
The conveyor belt has a rotary encoder mounted on it which measures the movement of the belt to a very high degree of precision. The encoder sends pulses to the Belt Buckle as the conveyor moves, in the neighborhood of 15 pulses per centimeter. At that rate, counting the pulses would occupy quite a lot of CPU time on the Belt Buckle because each pulse interrupts the CPU.
An interrupt stops whatever part of the code is currently being executed to run some other code instead. After running some numbers and doing some simple timing tests, I discovered that the encoder could interrupt the Belt Buckle several times while it processes a single command.
It's the software equivalent of your children yelling "MOM'S NOT HOME YET" every 2 seconds while she's away. Sure it keeps you informed, but you could be concentrating on more important things.
The solution is simple. We connected the encoder to a second Arduino. My trusty old Duemilanove now runs a very small (and consequently very fast) program that counts encoder pulses and transmits them to the Belt Buckle whenever it asks for them. So now, we've posted the kids by the window and given them some crayons and Oreos to keep them quiet. We can ask them "Is Mom home?" whenever it is convenient for us. The Arduino is better than your kids, though, because it will never purposefully reverse the functions of the Oreos and crayons.
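Here's a toy Python model of the arrangement. The stand-in class plays the Duemilanove's role; in the real setup the counting happens in an interrupt handler and the total travels over a serial link, but the shape of the interaction is the same: count quietly, answer only when asked.

```python
PULSES_PER_CM = 15  # from the post: roughly 15 encoder pulses per centimeter

class EncoderCounter:
    """Stand-in for the second Arduino: it only counts pulses and
    reports the running total when polled."""
    def __init__(self):
        self.count = 0

    def pulse(self):
        # In the real firmware, this would be the interrupt handler.
        self.count += 1

    def read(self) -> int:
        # The Belt Buckle polls this over serial when convenient.
        return self.count

def distance_cm(pulses: int) -> float:
    return pulses / PULSES_PER_CM

counter = EncoderCounter()
for _ in range(45):  # simulate 45 pulses, i.e. 3 cm of belt travel
    counter.pulse()
print(distance_cm(counter.read()))  # → 3.0
```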
The encoder is mounted under the conveyor, and utilizes a large Lego wheel held gently against the belt, as you can see here:
Bringing it all together
As the conveyor moves along, the Belt Buckle checks the distance that it has moved against every part currently on the belt. Once a part has traveled the appropriate distance, the Belt Buckle turns on the air valve to blast the part off the belt into the bin. Finally, the Belt Buckle will report to the server that the part has been sorted and flush the part ID from its memory.
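In rough Python, the tracking logic might look like the sketch below. The nozzle distance, bin number, and function names are all made up for the example; only the overall flow (track on 'A', assign on 'B', fire when the part has traveled far enough) comes from the post:

```python
PULSES_PER_CM = 15
NOZZLE_CM = {7: 40.0}  # made-up example: bin 7's air jet sits 40 cm past the camera

tracked = {}  # part_id -> [pulse count when first seen, assigned bin or None]

def on_camera(part_id: str, pulses_now: int):
    # 'A' command: the camera saw a part, start tracking it.
    tracked[part_id] = [pulses_now, None]

def on_assign(part_id: str, bin_number: int):
    # 'B' command: the sorting engine picked a bin.
    tracked[part_id][1] = bin_number

def check_belt(pulses_now: int):
    """Called as the belt advances; fires the valve for any part
    that has reached its bin's nozzle, then forgets the part."""
    for part_id, (start, bin_number) in list(tracked.items()):
        if bin_number is None:
            continue  # still waiting on the 'B' command
        traveled_cm = (pulses_now - start) / PULSES_PER_CM
        if traveled_cm >= NOZZLE_CM[bin_number]:
            print(f"fire valve {bin_number} for part {part_id}")
            del tracked[part_id]  # report to the server and flush the ID
```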
The next step is to start working on the other modules, primarily the Taxidermist (camera and image cropping) and the Classifist (sorting engine). Our next blog post will probably come after a breakthrough in one of those. Peace!